On the role of entanglement and statistics in learning (Supplementary material)
Note that this is defined only up to a global phase. [...] Learning models: in this section we first describe the learning models we will be concerned with in this paper. [...] Such quantum examples have been investigated in prior works [6, 8, 9]. A natural way to extend the learning model is to allow the algorithm to make quantum statistical queries; the QSQ model allows a quantum advantage in learning in this framework.
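In the spirit of the classical SQ model, a quantum statistical query returns the expectation of a bounded observable on the quantum example state, up to some tolerance tau. The following is a minimal numerical sketch of such an oracle in Python/NumPy; the amplitude-encoded state, the label-qubit observable, and the noise model are illustrative assumptions, not the paper's definitions.

    import numpy as np

    def quantum_example_state(c, n):
        # Amplitude-encode a Boolean concept c on n bits:
        # |psi_c> = 2^{-n/2} * sum_x |x, c(x)>, defined only up to a global phase.
        # Hypothetical construction for illustration.
        psi = np.zeros(2 ** (n + 1))
        for x in range(2 ** n):
            psi[2 * x + c(x)] = 2 ** (-n / 2)   # basis ordering |x, label>
        return psi

    def qsq_oracle(psi, observable, tau, rng):
        # Quantum statistical query: Tr(O |psi><psi|) up to error at most tau
        # (here modeled as uniform noise; in the model the error is adversarial).
        exact = float(np.real(psi.conj() @ observable @ psi))
        return exact + rng.uniform(-tau, tau)

    # Example: parity on 2 bits; the observable projects onto label qubit = 1,
    # so the exact value is the fraction of inputs with c(x) = 1 (here 0.5).
    n, c = 2, (lambda x: bin(x).count("1") % 2)
    psi = quantum_example_state(c, n)
    O = np.diag([i % 2 for i in range(2 ** (n + 1))])
    print(qsq_oracle(psi, O, tau=0.01, rng=np.random.default_rng(0)))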
Export Reviews, Discussions, Author Feedback and Meta-Reviews
First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This paper gives a new convex relaxation for sparse matrix factorization. The authors show that the new relaxation has better denoising performance and a lower statistical dimension than the traditional l_1 and trace norms. However, the new convex relaxation is hard to compute. The paper also discusses many potential applications, with experiments on denoising performance and an application to predicting drug-target interactions.
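For reference, the statistical dimension invoked by the review is the standard notion of Amelunxen, Lotz, McCoy and Tropp for a convex cone C in R^d: the expected squared norm of the projection of a standard Gaussian vector onto the cone,

    \[
      \delta(C) \;=\; \mathbb{E}_{g \sim \mathcal{N}(0, I_d)}\!\left[\, \|\Pi_C(g)\|_2^2 \,\right],
    \]

where \Pi_C denotes Euclidean projection onto C. A smaller statistical dimension of the descent cone of a regularizer at the signal translates into better denoising and recovery guarantees, which is the sense in which the relaxation below improves on the l_1 and trace norms.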
Tight convex relaxations for sparse matrix factorization
Based on a new atomic norm, we propose a new convex formulation for sparse matrix factorization problems in which the number of nonzero elements of the factors is assumed fixed and known. The formulation counts sparse PCA with multiple factors, subspace clustering and low-rank sparse bilinear regression among its potential applications. We compute slow rates and an upper bound on the statistical dimension of the suggested norm for rank-1 matrices, showing that its statistical dimension is an order of magnitude smaller than that of the usual l_1 norm, the trace norm, and their combinations. Even though our convex formulation is in theory hard and does not lead to provably polynomial-time algorithmic schemes, we propose an active set algorithm leveraging the structure of the convex problem to solve it, and we show promising numerical results.
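The atoms of such a norm are unit-Frobenius-norm rank-1 matrices a b^T with k-sparse a and q-sparse b, and an active set scheme repeatedly adds the atom most correlated with the current residual. Below is a minimal greedy sketch of that idea in Python/NumPy, under stated assumptions: truncated power iteration stands in as a heuristic for the (NP-hard) atom-selection subproblem, the coefficient step is a simple closed form, and the joint re-optimization over the active set that a full active set method performs is omitted; the function names are ours, and this is not the paper's exact algorithm.

    import numpy as np

    def _truncate(v, s):
        # Keep the s largest-magnitude entries of v and renormalize.
        out = np.zeros_like(v)
        idx = np.argsort(-np.abs(v))[:s]
        out[idx] = v[idx]
        return out / (np.linalg.norm(out) + 1e-12)

    def best_sparse_atom(G, k, q, iters=50):
        # Heuristic for max a^T G b over unit vectors with ||a||_0 <= k and
        # ||b||_0 <= q via truncated power iteration; the exact subproblem is NP-hard.
        a = _truncate(np.ones(G.shape[0]), k)
        for _ in range(iters):
            b = _truncate(G.T @ a, q)
            a = _truncate(G @ b, k)
        return a, b

    def greedy_denoise(Y, k, q, lam, max_atoms=10):
        # Sketch for min_Z 0.5*||Y - Z||_F^2 + lam * Omega_kq(Z): add the sparse
        # rank-1 atom best correlated with the residual, with closed-form
        # coefficient c = <atom, residual> - lam. A full active set method would
        # also re-optimize all coefficients jointly after each addition.
        Z = np.zeros_like(Y)
        for _ in range(max_atoms):
            R = Y - Z                      # residual = negative gradient of the loss
            a, b = best_sparse_atom(R, k, q)
            corr = a @ R @ b
            if corr <= lam:                # no atom violates the dual constraint: stop
                break
            Z += (corr - lam) * np.outer(a, b)
        return Z

    # Example: denoise a planted (3,4)-sparse rank-1 signal.
    rng = np.random.default_rng(0)
    a0 = _truncate(rng.standard_normal(20), 3)
    b0 = _truncate(rng.standard_normal(30), 4)
    Y = 5.0 * np.outer(a0, b0) + 0.1 * rng.standard_normal((20, 30))
    Z_hat = greedy_denoise(Y, k=3, q=4, lam=0.5)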